
    Bayesian inference for inverse problems

    Traditionally, the MaxEnt workshops start with a tutorial day. This paper summarizes my talk at the 2001 workshop at Johns Hopkins University. The main idea of the talk is to show how Bayesian inference naturally gives us all the tools we need to solve real inverse problems: from simple inversion, where we assume we know the forward model and all its input parameters exactly, up to more realistic, advanced problems of myopic or blind inversion, where we may be uncertain about the forward model and may have noisy data. Starting with an introduction to inverse problems through a few examples and an explanation of their ill-posed nature, I briefly present the main classical deterministic methods, such as data matching and classical regularization, to show their limitations. I then present the main classical probabilistic methods based on likelihood, information theory, and maximum entropy, as well as the Bayesian inference framework for such problems. I show that the Bayesian framework not only generalizes all these methods but also gives us natural tools, for example, for inferring the uncertainty of the computed solutions, for estimating the hyperparameters, and for handling myopic or blind inversion problems. Finally, through a deconvolution example, I present a few state-of-the-art methods based on Bayesian inference, designed in particular for mass spectrometry data processing problems.

    Comment: Presented at MaxEnt01. To appear in Bayesian Inference and Maximum Entropy Methods, B. Fry (Ed.), AIP Proceedings. 20 pages, 13 Postscript figures.
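    As a minimal illustration of the inversion setting described above (a generic sketch, not the paper's own algorithm), consider a linear model y = H x + n with Gaussian noise of variance sigma2 and a Gaussian prior on x with variance tau2. The posterior is then Gaussian: its mean coincides with a Tikhonov-regularized solution, and its covariance quantifies the uncertainty of that solution. All names below are hypothetical:

        import numpy as np

        def gaussian_posterior(H, y, sigma2, tau2):
            # Posterior for y = H x + n, with n ~ N(0, sigma2 I) and prior x ~ N(0, tau2 I).
            # The weight lam = sigma2 / tau2 plays the role of the classical Tikhonov
            # regularization parameter; here it falls out of the prior instead of
            # being tuned by hand.
            lam = sigma2 / tau2
            A = H.T @ H + lam * np.eye(H.shape[1])
            x_map = np.linalg.solve(A, H.T @ y)   # posterior mean = MAP estimate
            cov = sigma2 * np.linalg.inv(A)       # posterior covariance (uncertainty)
            return x_map, cov

    For deconvolution, H would be the convolution matrix of the (assumed known) impulse response; the myopic and blind settings discussed in the talk arise when H itself is uncertain.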

    Penalized maximum likelihood for multivariate Gaussian mixture

    In this paper, we first consider the estimation of the distribution of a multivariate random process using a multivariate Gaussian mixture law. The labels of the mixture are allowed to have a general probability law, which makes it possible to model a temporal structure of the process under study. We generalize the univariate Gaussian mixture case of [Ridolfi99] to show that the likelihood is unbounded and goes to infinity when one of the covariance matrices approaches the boundary of singularity of the set of non-negative definite matrices. We characterize the parameter set of these singularities. As a solution to this degeneracy problem, we show that penalizing the likelihood with an Inverse Wishart prior on the covariance matrices results in a penalized, or maximum a posteriori, criterion which is bounded. The existence of positive definite matrices optimizing this criterion can then be guaranteed. We also show that, with a modified EM procedure or with a Bayesian sampling scheme, we can constrain the covariance matrices to belong to a particular subclass of covariance matrices. Finally, we study degeneracies in the source separation problem, where the characterization of the parameter singularity set is more complex. We show, however, that an Inverse Wishart prior on the covariance matrices eliminates the degeneracies in this case too.

    Comment: Presented at MaxEnt01. To appear in Bayesian Inference and Maximum Entropy Methods, B. Fry (Ed.), AIP Proceedings. 11 pages, 3 Postscript figures.
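    To make the penalization concrete: with an Inverse Wishart(Psi, nu) prior on a component covariance, the M-step of a MAP-EM variant replaces the usual weighted scatter estimate by a regularized counterpart that stays away from the singular boundary. A sketch for a single component, assuming this standard prior form (the paper's exact criterion may differ; names are hypothetical):

        import numpy as np

        def map_cov_update(X, resp_k, mu_k, Psi, nu):
            # MAP M-step covariance update for one Gaussian mixture component
            # under an Inverse Wishart(Psi, nu) prior: the added Psi keeps the
            # update positive definite even when few points are assigned.
            d = X.shape[1]
            Nk = resp_k.sum()                        # effective count for the component
            diff = X - mu_k
            Sk = (resp_k[:, None] * diff).T @ diff   # responsibility-weighted scatter
            return (Psi + Sk) / (nu + Nk + d + 1)    # mode of the Inverse Wishart posterior

    Setting Psi = 0 and nu = -(d + 1) formally recovers the ordinary maximum likelihood update Sk / Nk, which is exactly the degenerate case where the likelihood is unbounded.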

    Information and Covariance Matrices for Multivariate Burr III and Logistic distributions

    The main result of this paper is the derivation of exact analytical expressions for the information and covariance matrices of the multivariate Burr III and logistic distributions. These distributions arise as tractable parametric models in price and income distributions, reliability, economics, population growth, and survival data. We show that all the calculations can be obtained from one main multidimensional moment integral, whose expression is obtained through a particular change of variables. Indeed, we consider that this technique for calculating improper integrals is of independent interest in applied probability.

    Comment: Submitted to Communications in Statistics.
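    For orientation, the univariate Burr III distribution function and the generic entry of the Fisher information matrix that such derivations target are, in standard notation (the multivariate parametrization used in the paper is not reproduced here):

        F(x; c, k) = \left(1 + x^{-c}\right)^{-k}, \qquad x > 0,\; c > 0,\; k > 0,

        \left[\mathcal{I}(\theta)\right]_{ij} = -\,\mathbb{E}\!\left[\frac{\partial^2 \log f(X;\theta)}{\partial \theta_i \,\partial \theta_j}\right].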

    An alternative inference tool to total probability formula and its applications

    The total probability formula and Bayes' formula are two basic tools for using prior information in Bayesian statistics. In this paper we introduce an alternative tool for using prior information. This new tool enables us to improve some traditional results in statistical inference. As far as the authors know, however, there is no previous work on this subject, except [1]. The results of this paper can be extended to other branches of probability and statistics. In Section 2, a total probability formula based on the median is defined and its basic properties are proved. A few applications of this new tool are given in Section 3.

    Comment: Presented at the 23rd International Workshop on Bayesian and Maximum Entropy Methods (MaxEnt23), Aug. 3-7, 2003, Jackson Hole, USA
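    For contrast, the classical tool the paper takes as its point of departure is the total probability formula: for a partition {B_i} of the sample space,

        P(A) = \sum_i P(A \mid B_i)\, P(B_i), \qquad \text{equivalently} \qquad P(A) = \mathbb{E}\big[P(A \mid X)\big].

    The paper's alternative, as the abstract indicates, replaces this expectation-based identity with a median-based analogue; its precise definition is given in Section 2 of the paper and is not reproduced here.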